Monte Carlo Methods for Stimulus Generation

Simon Barthelmé, CNRS, Gipsa-lab (Grenoble)

November 2, 2015

Generating Random Stimuli

Three reasons why people use random stimuli in psychophysics:

  1. To make an easy task difficult (external noise)
  2. To randomly perturb a system and study its behaviour (classification images)
  3. To get stimuli with controlled statistics (many examples)

I haven’t got much to say about (1), so I’ll only talk about (2) and (3).

Part I: Random Perturbations

I’ll start with a neat study by Kontsevich & Tyler (2004)

What makes Mona Lisa smile?

What they did

Results

Noise = Random exploration of stimulus space


Perturbations may tell you how decisions are made

White noise perturbations

White (Gaussian) pixel noise corresponds to sampling each pixel independently from the same Gaussian distribution.

The geometry of white noise perturbations

White noise perturbations around your stimulus: \[ \mathbf{x} = \mathbf{u} + \mathbf{z} \] with \(z_i \sim N(0, \sigma^2)\)

A consequence of using white noise is that all directions of perturbation are equally likely (white noise is spherical).
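The deck’s snippets are in R, but the spherical-symmetry claim is easy to check numerically; here is a minimal sketch in Python/NumPy (all names are mine). If \(\mathbf{z}\) is white Gaussian, the direction \(\mathbf{z}/\lVert\mathbf{z}\rVert\) is uniform on the unit sphere, so its projection onto any fixed unit vector has the same distribution:

```python
# White Gaussian perturbations are spherically symmetric: the direction
# z/||z|| is uniform on the unit sphere, so no axis is privileged.
import numpy as np

rng = np.random.default_rng(0)
n = 1000                                   # dimension of "pixel space"
z = rng.standard_normal((5000, n))         # 5000 white-noise perturbations
directions = z / np.linalg.norm(z, axis=1, keepdims=True)

u1 = np.zeros(n); u1[0] = 1.0              # a coordinate axis
u2 = rng.standard_normal(n)                # an arbitrary axis
u2 /= np.linalg.norm(u2)
p1, p2 = directions @ u1, directions @ u2
print(p1.std(), p2.std())                  # both close to 1/sqrt(n)
```

Whatever axis we pick, the projections have the same spread, which is what “all directions of perturbation are equally likely” means in practice.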

Is white noise efficient?

Pixel space is very very big! The fact that white noise samples all directions with equal probability means it is sometimes very inefficient.

Example: how old does Mona Lisa look?

Inefficiency of white noise

Here most directions leave the quantity of interest invariant. Eventually white noise will produce a change in perceived age but that might take forever. In such cases we have to be more sophisticated in our stimulus design.
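A quick numerical illustration of that inefficiency (a Python sketch; the “age” direction is a made-up stand-in): if only a single direction in pixel space changes the quantity of interest, white noise puts on average just a fraction \(1/n\) of its energy along it.

```python
# If only one direction in pixel space matters, white noise "wastes"
# almost all of its energy on irrelevant directions.
import numpy as np

rng = np.random.default_rng(1)
n = 100 * 100                            # a 100x100 image
relevant = rng.standard_normal(n)        # hypothetical "perceived age" axis
relevant /= np.linalg.norm(relevant)

z = rng.standard_normal((200, n))        # 200 white-noise perturbations
frac = (z @ relevant) ** 2 / (z * z).sum(axis=1)
print(frac.mean())                       # around 1/n = 1e-4
```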

Transformations

Stimuli

Olman & Kersten, results

Conclusion so far

Part II: generating stimuli with controlled statistics

Extremely common scenario: the experimental setting requires that you control for some low-level aspect of the stimuli, like mean luminance, or contrast, or salience, etc.

  1. What sort of control do we mean, exactly? Control in expectation vs. control in sample
  2. How do we generate random stimuli while controlling for XYZ?

Control in expectation vs. control in sample

Take the problem of controlling the mean luminance of a noise patch.

What’s the difference between:

patch <- imnoise(40,40)

and:

patch2 <- imnoise(40,40)
patch2 <- patch2 - mean(patch2)

?

Control in expectation vs. control in sample

In the first example, we generate a patch of noise by sampling each value from a Gaussian: \[ x_i \sim N(0,1) \] for \(i = 1 \ldots N_{\mathrm{pixels}}\). The expected value of \(x_i\) equals 0, but the actual sample mean of the patch won’t be exactly 0:

patch <- imnoise(40,40)
mean(patch)
## [1] -0.0175703

Control in expectation vs. control in sample

In the second example, we also generate \[ x_i \sim N(0,1) \] for \(i = 1 \ldots N_{\mathrm{pixels}}\), and then subtract the sample mean:

patch2 <- imnoise(40,40)
patch2 <- patch2 - mean(patch2)
mean(patch2)
## [1] -1.172958e-17

The sample mean now equals 0 (up to numerical error)

Control in expectation vs. control in sample

The first method of patch generation controls luminance in expectation, i.e. \[ E(\mathbf{x}) = 0 \] where the expectation is over different noise patches (realisations). The second method controls luminance in sample: \[ \frac{1}{n} \sum_i x_i = 0 \] for every noise patch, summing over pixels.
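The difference shows up in the variability of the sample mean across realisations. A Python sketch mirroring the R snippets above (names are mine):

```python
# In-expectation control: means scatter across realisations.
# In-sample control: every realisation has mean (essentially) 0.
import numpy as np

rng = np.random.default_rng(2)

def patch():               # E[x_i] = 0, but the sample mean varies
    return rng.standard_normal((40, 40))

def patch_centred():       # sample mean forced to 0
    p = patch()
    return p - p.mean()

m1 = [patch().mean() for _ in range(1000)]
m2 = [patch_centred().mean() for _ in range(1000)]
print(np.std(m1))          # ~ 1/sqrt(1600) = 0.025
print(np.std(m2))          # ~ 0 (numerical error only)
```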

Controlling contrast in expectation

Contrast is usually defined as the standard deviation of the luminance values (RMS contrast).

patch <- imnoise(40,40)

gives us contrast control in expectation. We can verify this by computing the contrast of 1,000 random patches:

replicate(1000,sd(imnoise(40,40))) %>% mean
## [1] 1.000133

Controlling contrast in sample

Here’s a way to have in-sample control:

patch2 <- imnoise(40,40)
patch2 <- (patch2-mean(patch2))/sd(patch2)
mean(patch2)
## [1] 4.087849e-18
sd(patch2)
## [1] 1

When do the two methods differ?

1. We have very small patches

This one is the most obvious. In small samples, the sample average can be quite different from its expectation.

mean.lum <- replicate(1e3,mean(imnoise(5,5)))
hist(mean.lum,main="Mean luminance value",xlab="")

2. Our noise is non-white

In non-white (coloured) noise the samples are no longer IID but correlated. There are many ways of obtaining coloured noise; perhaps the easiest is to filter white noise:

filter <- imfill(30,30,val=1/30) #A box filter 
fnoise <- convolve(imnoise(500,500),filter,FALSE) #Apply filter
plot(fnoise)

Coloured noise means lower variability within samples, more variability across

In coloured noise, high values tend to persist over space (the same goes for low values). This means coloured noise can stay far away from its average value, and so you’ll see more variability across samples.

filter <- imfill(4,4,val=1/4)
m.white <- replicate(1e3,mean(imnoise(50,50)))
m.col <- replicate(1e3,mean(convolve(imnoise(50,50),filter,FALSE)))
plot(m.col,xlab="Realisation",ylab="Mean luminance in patch",col="pink",pch=19)
points(m.white,col="black")

3. Strong constraints

Imagine an experiment looking at left vs. right preferences in eye movements, with a stimulus set made up of \(k\) pictures. We want to control contrast in the left and right halves of the pictures.

Strong in-sample constraints can lead to weird-looking stimuli

Here’s Mona Lisa in equalised contrast:

mona %>% grayscale %>% imsplit("x",2) %>% llply(function(v) (v-mean(v))/sd(v)) %>% imappend("x") %>% plot

In-sample vs. in-expectation control: Algorithms

Iterated projections

From Wikipedia.

(More on the board if time allows)
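The bare bones of the idea, sketched in Python (function names are mine): to satisfy several constraints at once, project onto each constraint set in turn. With the mean and contrast constraints from earlier the intersection is reached essentially immediately; for harder constraint sets you keep iterating.

```python
# Iterated (alternating) projections: repeatedly project onto each
# constraint set until the point satisfies all of them.
import numpy as np

def project_mean(x, m=0.0):     # nearest point with mean(x) = m
    return x - x.mean() + m

def project_sd(x, s=1.0):       # rescale about the mean so sd(x) = s
    mu = x.mean()
    return mu + (x - mu) * (s / x.std())

rng = np.random.default_rng(3)
x = rng.uniform(-5, 5, size=1000)
for _ in range(50):             # alternate until convergence
    x = project_sd(project_mean(x))
print(x.mean(), x.std())        # ~ 0 and 1: both constraints hold
```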

Portilla & Simoncelli, 2000

Julesz conjecture: textures are visual stimuli that are spatially homogeneous, so the brain only retains a few summary statistics. Instead of the full stimulus \(\mathbf{x} \in \mathbb{R}^n\), you only retain \(s(\mathbf{x}) \in \mathbb{R}^m\), with \(m \ll n\).

Portilla & Simoncelli: - Identified a set of summary statistics \(s(\mathbf{x})\) - Conjectured that, given \(s(\mathbf{x})\), any \(\mathbf{y}\) s.t. \(s(\mathbf{y})=s(\mathbf{x})\) should be perceptually equivalent (“metameric”) - Used a projection algorithm to find such “metamers”.
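A toy version of the synthesis idea in Python (this is not Portilla & Simoncelli’s actual statistic set, which uses wavelet correlations; here \(s(\mathbf{x})\) is just the per-quadrant mean and sd): starting from white noise, impose the target’s statistics to get a “metamer” under \(s\).

```python
# Toy metamer synthesis: match per-quadrant (mean, sd) of a target,
# starting from white noise.
import numpy as np

rng = np.random.default_rng(4)
x = rng.standard_normal((64, 64)).cumsum(axis=0)  # some target "texture"

def quadrants(im):
    h, w = im.shape
    return [im[:h//2, :w//2], im[:h//2, w//2:],
            im[h//2:, :w//2], im[h//2:, w//2:]]

y = rng.standard_normal(x.shape)                  # start from white noise
for qy, qx in zip(quadrants(y), quadrants(x)):    # views: edits y in place
    qy -= qy.mean()
    qy *= qx.std() / qy.std()
    qy += qx.mean()

s = lambda im: [(q.mean(), q.std()) for q in quadrants(im)]
print(np.allclose(s(y), s(x)))   # True: same summary statistics
```

The synthesised `y` shares the summary statistics of `x` while being a completely different image; under the Julesz conjecture, that is all the visual system would keep.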

Portilla & Simoncelli, 2000

Taken from a slide by Aude Oliva (MIT)

Conclusion